
    A vision-based monitoring system for very early automatic detection of forest fires

    Paper presented at the I International Conference on Modelling, Monitoring and Management of Forest Fires, held in Toledo, Spain, 17-19 September 2008. This paper describes a system capable of detecting smoke at the very beginning of a forest fire with precise spatial resolution. The system is based on a wireless vision sensor network. Each sensor monitors a small area of vegetation by running on-site a tailored vision algorithm to detect the presence of smoke. This algorithm examines chromaticity changes and spatio-temporal patterns in the scene that are characteristic of smoke dynamics at the early stages of propagation. Processing takes place at the sensor nodes and, when smoke is detected, an alarm signal is transmitted through the network along with a reference to the location of the triggered zone, without requiring complex GIS systems. This method improves the spatial resolution over the surveilled area and reduces the rate of false alarms. An energy-efficient implementation of the sensor/processor devices is crucial, as it determines the autonomy of the network nodes. At this point, we have developed an ad hoc vision algorithm, adapted to the nature of the problem, to be integrated into a single-chip sensor/processor. As a first step to validate the feasibility of the system, we applied the algorithm to smoke sequences recorded with commercial cameras in real-world scenarios that simulate the working conditions of the network nodes. The results obtained point to very high reliability and robustness in the detection process. This work is funded by Junta de Andalucía (CICE) through project 2006-TIC-2352. Peer reviewed.
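
    The abstract does not disclose the implementation of the on-site vision algorithm. Purely as an illustrative sketch of the kind of per-pixel chromaticity-change test it describes (function names and thresholds below are assumptions, not the authors' method), such a check could look as follows in Python with NumPy:

        import numpy as np

        def chromaticity(frame_rgb):
            # Normalized (r, g) chromaticity: removes most of the intensity component.
            s = frame_rgb.sum(axis=2, keepdims=True) + 1e-6
            return frame_rgb[..., :2] / s

        def smoke_candidates(frame_rgb, background_rgb,
                             drift_thresh=0.03, gray_thresh=0.05):
            # Smoke tends to desaturate the scene, pulling (r, g) toward (1/3, 1/3).
            # Thresholds are illustrative placeholders, not tuned values.
            drift = np.linalg.norm(chromaticity(frame_rgb) - chromaticity(background_rgb), axis=2)
            toward_gray = np.linalg.norm(chromaticity(frame_rgb) - 1.0 / 3.0, axis=2)
            return (drift > drift_thresh) & (toward_gray < gray_thresh)

    A practical detector would combine this per-frame test with the spatio-temporal persistence analysis mentioned in the abstract before raising an alarm.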

    Review of ADCs for imaging

    The aim of this article is to guide image sensor designers in optimizing the analog-to-digital conversion of pixel outputs. The most common ADC topologies for image sensors are presented and discussed. The ADC requirements specific to these sensors are analyzed and quantified. Finally, we present relevant recent contributions of specific ADCs for image sensors and compare them using a novel FOM. © (2014) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only. Peer reviewed.

    Impact of Thermal Throttling on Long-Term Visual Inference in a CPU-Based Edge Device

    Many application scenarios of edge visual inference, e.g., robotics or environmental monitoring, eventually require long periods of continuous operation. In such periods, the processor temperature plays a critical role in keeping a prescribed frame rate. In particular, the heavy computational load of convolutional neural networks (CNNs) may lead to thermal throttling and hence performance degradation within a few seconds. In this paper, we report and analyze the long-term performance of 80 different cases resulting from running five CNN models on four software frameworks and two operating systems, without and with active cooling. This comprehensive study was conducted on a low-cost edge platform, namely the Raspberry Pi 4B (RPi4B), under stable indoor conditions. The results show that hysteresis-based active cooling prevented thermal throttling in all cases, thereby improving the throughput by up to approximately 90% versus no cooling. Interestingly, the range of fan usage during active cooling varied from 33% to 65%. Given the impact of the fan on the power consumption of the system as a whole, these results stress the importance of a suitable selection of CNN model and software components. To assess the performance in outdoor applications, we integrated an external temperature sensor with the RPi4B and conducted a set of experiments with no active cooling over a wide interval of ambient temperature, ranging from 22 °C to 36 °C. Variations of up to 27.7% were measured with respect to the maximum throughput achieved in that interval. This demonstrates that ambient temperature is a critical parameter in case active cooling cannot be applied. Peer reviewed.
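
    The paper's controller parameters are not reproduced here; as a minimal sketch of hysteresis-based active cooling on a Raspberry Pi (the temperature thresholds and the fan-driving placeholder are assumptions, not the study's settings), the idea can be expressed as:

        import time

        TEMP_PATH = "/sys/class/thermal/thermal_zone0/temp"  # CPU temperature, millidegrees Celsius
        T_ON, T_OFF = 65.0, 55.0  # illustrative hysteresis thresholds in degrees Celsius

        def read_cpu_temp():
            with open(TEMP_PATH) as f:
                return int(f.read().strip()) / 1000.0

        def set_fan(on):
            # Placeholder: drive the fan transistor/GPIO here (hardware-specific).
            print("fan", "ON" if on else "OFF")

        def control_loop(period_s=2.0):
            fan_on = False
            while True:
                t = read_cpu_temp()
                if not fan_on and t >= T_ON:
                    fan_on = True
                    set_fan(True)
                elif fan_on and t <= T_OFF:
                    fan_on = False
                    set_fan(False)
                time.sleep(period_s)

    The gap between T_ON and T_OFF avoids rapid fan toggling; the abstract reports fan usage between 33% and 65% across its cases, well below continuous operation.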

    Image Feature Extraction Acceleration

    Image feature extraction is instrumental for most of the best-performing algorithms in computer vision. However, it is also expensive in terms of computational and memory resources for embedded systems due to the need to deal with individual pixels at the earliest processing levels. In this regard, conventional system architectures do not take advantage of the potential exploitation of parallelism and distributed memory from the very beginning of the processing chain. Raw pixel values provided by the front-end image sensor are squeezed into a high-speed interface with the rest of the system components. Only then, after deserializing this massive dataflow, is parallelism, if any, exploited. This chapter introduces a rather different approach from an architectural point of view. We present two Application-Specific Integrated Circuits (ASICs) where the 2-D array of photo-sensitive devices featured by regular imagers is combined with distributed memory supporting concurrent processing. Custom circuitry is added per pixel in order to accelerate image feature extraction right at the focal plane. Specifically, the proposed sensing-processing chips aim at the acceleration of two flagship algorithms within the computer vision community: the Viola-Jones face detection algorithm and the Scale Invariant Feature Transform (SIFT). Experimental results prove the feasibility and benefits of this architectural solution. Ministerio de Economía y Competitividad TEC2012-38921-C02, IPT-2011-1625-430000, IPC-20111009. Junta de Andalucía TIC 2338-2013. Xunta de Galicia EM2013/038. Office of Naval Research (USA) N00014141035.
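
    The on-chip circuitry itself is not reproducible in software, but the integral image, the primitive on which Viola-Jones feature evaluation relies, illustrates what focal-plane acceleration targets in that case. A minimal NumPy sketch (illustrative only, not the chips' implementation):

        import numpy as np

        def integral_image(gray):
            # Summed-area table: ii[y, x] = sum of gray[:y+1, :x+1].
            return gray.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

        def box_sum(ii, top, left, bottom, right):
            # Sum over the inclusive rectangle using at most four lookups;
            # Haar-like features are differences of such box sums.
            total = ii[bottom, right]
            if top > 0:
                total -= ii[top - 1, right]
            if left > 0:
                total -= ii[bottom, left - 1]
            if top > 0 and left > 0:
                total += ii[top - 1, left - 1]
            return total

    Once the table is built, evaluating a Haar-like feature reduces to a handful of box_sum calls regardless of the feature size.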

    Pixel-wise parameter adaptation for single-exposure extension of the image dynamic range

    High dynamic range imaging is central in application fields like surveillance, intelligent transportation and advanced driving assistance systems. In some scenarios, methods for dynamic range extension based on multiple captures have shown limitations in apprehending the dynamics of the scene: artifacts appear that can jeopardize the correct segmentation of objects in the image. We have developed several techniques for the on-chip implementation of single-exposure extension of the dynamic range. We work on the upper extreme of the range, i.e., administering the available full-well capacity. Parameters are adapted pixel-wise in order to accommodate a high intra-scene range of illuminations. Peer reviewed.
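
    The abstract does not detail the on-chip technique. As a minimal behavioral sketch, assuming a piecewise-linear compression obtained by raising the allowed charge level during the exposure (barrier levels and times below are illustrative assumptions, not the authors' parameters):

        import numpy as np

        def companded_pixel_response(photocurrent, t_exp=1.0, full_well=1.0,
                                     barriers=((0.6, 0.5), (0.85, 0.8))):
            # Charge integrates linearly but is clipped to increasing barrier levels
            # at given fractions of the exposure, so bright pixels compress
            # instead of saturating at full well.
            q = np.zeros_like(photocurrent, dtype=float)
            t_prev = 0.0
            for level, t in list(barriers) + [(full_well, t_exp)]:
                q = np.minimum(q + photocurrent * (t - t_prev), level)
                t_prev = t
            return q

    For small photocurrents the response stays linear, while bright pixels follow successively gentler slopes, which is one common way to extend the range within a single exposure.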

    Focal-plane generation of multi-resolution and multi-scale image representation for low-power vision applications

    Paper presented at "Infrared Technology and Applications XXXVII", held in Orlando (USA) on 25 April 2011. Early vision stages represent a considerably heavy computational load. A huge amount of data needs to be processed under strict timing and power requirements. Conventional architectures usually fail to adhere to the specifications in many application fields, especially when autonomous vision-enabled devices are to be implemented, as in lightweight UAVs, robotics or wireless sensor networks. A bioinspired architectural approach can be employed, consisting of a hierarchical division of the processing chain that conveys the highest computational demand to the focal plane. There, distributed processing elements, concurrent with the photosensitive devices, influence the image capture and generate a pre-processed representation of the scene where only the information of interest for subsequent stages remains. These focal-plane operators are implemented by analog building blocks, which may individually be a little imprecise, but as a whole render the appropriate image processing very efficiently. As a proof of concept, we have developed a 176x144-pixel smart CMOS imager that delivers lighter but enriched representations of the scene. Each pixel of the array contains a photosensor and some switches and weighted paths allowing reconfigurable resolution and spatial filtering. An energy-based image representation is also supported. These functionalities greatly simplify the operation of the subsequent digital processor implementing the high-level logic of the vision algorithm. The resulting figures, 5.6 mW @ 30 fps, permit the integration of the smart image sensor with a wireless interface module (Imote2 from Memsic Corp.) for the development of vision-enabled WSN applications. This work is partially funded by the Andalusian regional government (Junta de Andalucía-CICE) through project 2006-TIC-2352 and the Spanish Ministry of Science (MICINN) through project TEC2009-11812, co-funded by the European Regional Development Fund, and also supported by the Office of Naval Research (USA) through grant N000141110312. Peer reviewed.
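
    The charge-redistribution circuitry is analog, but its effect on the captured image can be sketched in software. A minimal block-averaging model of the reconfigurable-resolution mode (illustrative only; the parameters are assumptions, not the chip's modes):

        import numpy as np

        def block_average(img, block=2):
            # Average non-overlapping block x block tiles, mimicking charge
            # redistribution among neighboring pixels at the focal plane.
            h, w = img.shape
            h, w = h - h % block, w - w % block  # crop to a multiple of the block size
            tiles = img[:h, :w].astype(float).reshape(h // block, block, w // block, block)
            return tiles.mean(axis=(1, 3))

        def resolution_pyramid(img, levels=3):
            # Successively halved-resolution representations of the scene.
            pyramid = [img.astype(float)]
            for _ in range(levels - 1):
                pyramid.append(block_average(pyramid[-1], block=2))
            return pyramid

    Repeating such local averaging also approximates the coarse levels of a multi-scale representation for the subsequent digital stages.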

    On-site forest fire smoke detection by low-power autonomous vision sensor

    Paper presented at the VI International Conference on Forest Fire Research, held in Coimbra (Portugal), 15-18 November 2010. Early detection plays a crucial role in preventing forest fires from spreading. Wireless vision sensor networks deployed throughout high-risk areas can perform fine-grained surveillance and thereby very early detection and precise location of forest fires. One of the fundamental requirements that needs to be met at the network nodes is reliable low-power on-site image processing. It greatly simplifies the communication infrastructure of the network, as only alarm signals instead of complete images are transmitted, anticipating thus a very competitive cost. As a first approximation to fulfilling such a requirement, this paper reports the results achieved from field tests carried out in collaboration with the Andalusian Fire-Fighting Service (INFOCA). Two controlled burns of forest debris were carried out (www.youtube.com/user/vmoteProject). Smoke was successfully detected on-site by the EyeRIS™ v1.2, a general-purpose autonomous vision system built by AnaFocus Ltd., in which a vision algorithm was programmed. No false alarm was triggered despite the significant motion other than smoke present in the scene. Finally, as a further step, we describe the preliminary laboratory results obtained from a prototype vision chip which implements, at very low energy cost, some image processing primitives oriented to environmental monitoring. This work is funded by CICE/JA and MICINN (Spain) through projects 2006-TIC-2352 and TEC2009-11812, respectively. Peer reviewed.

    Demo: Results of 'iCaveats', a Project on the Integration of Architectures and Components for Embedded Vision

    iCaveats is a project on the integration of components and architectures for embedded vision in transport and security applications. A compact and efficient implementation of autonomous vision systems is difficult to accomplish using the conventional image processing chain. In this project we have targeted alternative approaches that exploit the inherent parallelism in the visual stimulus and hierarchical multilevel optimization. A set of demos showcases the advances at sensor level, in adapted architectures for signal processing, and in power management and energy harvesting. Ministerio de Economía, Industria y Competitividad de España (MINECO) and the European Regional Development Fund (FEDER), 'iCaveats' TEC2015-66878-C3-1-R, TEC2015-66878-C3-2-R and TEC2015-66878-C3-3-R. Junta de Andalucía, 'SmartCIS3D' TIC 2338-2013. FEDER 2016-2019, ED431G/08 and 2017-2020, ED431C 2017/69. European Research Executive Agency (EU-REA), 'Achieve' H2020 MSCA-ITN 2017 No. 76586.

    Varespladib and cardiovascular events in patients with an acute coronary syndrome: the VISTA-16 randomized clinical trial

    IMPORTANCE: Secretory phospholipase A2 (sPLA2) generates bioactive phospholipid products implicated in atherosclerosis. The sPLA2 inhibitor varespladib has favorable effects on lipid and inflammatory markers; however, its effect on cardiovascular outcomes is unknown. OBJECTIVE: To determine the effects of sPLA2 inhibition with varespladib on cardiovascular outcomes. DESIGN, SETTING, AND PARTICIPANTS: A double-blind, randomized, multicenter trial at 362 academic and community hospitals in Europe, Australia, New Zealand, India, and North America of 5145 patients randomized within 96 hours of presentation of an acute coronary syndrome (ACS) to either varespladib (n = 2572) or placebo (n = 2573), with enrollment between June 1, 2010, and March 7, 2012 (study termination on March 9, 2012). INTERVENTIONS: Participants were randomized to receive varespladib (500 mg) or placebo daily for 16 weeks, in addition to atorvastatin and other established therapies. MAIN OUTCOMES AND MEASURES: The primary efficacy measure was a composite of cardiovascular mortality, nonfatal myocardial infarction (MI), nonfatal stroke, or unstable angina with evidence of ischemia requiring hospitalization at 16 weeks. Six-month survival status was also evaluated. RESULTS: At a prespecified interim analysis, including 212 primary end point events, the independent data and safety monitoring board recommended termination of the trial for futility and possible harm. The primary end point occurred in 136 patients (6.1%) treated with varespladib compared with 109 patients (5.1%) treated with placebo (hazard ratio [HR], 1.25; 95% CI, 0.97-1.61; log-rank P = .08). Varespladib was associated with a greater risk of MI (78 [3.4%] vs 47 [2.2%]; HR, 1.66; 95% CI, 1.16-2.39; log-rank P = .005). The composite secondary end point of cardiovascular mortality, MI, and stroke was observed in 107 patients (4.6%) in the varespladib group and 79 patients (3.8%) in the placebo group (HR, 1.36; 95% CI, 1.02-1.82; P = .04). CONCLUSIONS AND RELEVANCE: In patients with recent ACS, varespladib did not reduce the risk of recurrent cardiovascular events and significantly increased the risk of MI. sPLA2 inhibition with varespladib may be harmful and is not a useful strategy to reduce adverse cardiovascular outcomes after ACS. TRIAL REGISTRATION: clinicaltrials.gov Identifier: NCT01130246. Copyright 2014 American Medical Association. All rights reserved.

    A Functional Misexpression Screen Uncovers a Role for Enabled in Progressive Neurodegeneration

    Drosophila is a well-established model to study the molecular basis of neurodegenerative diseases. We carried out a misexpression screen to identify genes involved in neurodegeneration, examining locomotor behavior in young and aged flies. We hypothesized that a progressive loss of rhythmic activity could reveal novel genes involved in neurodegenerative mechanisms. One of the interesting candidates showing progressive arrhythmicity has reduced enabled (ena) levels. ena down-regulation gave rise to progressive vacuolization in specific regions of the adult brain. Abnormal staining of pre-synaptic markers such as cysteine string protein (CSP) suggests that impaired axonal transport could underlie the neurodegeneration observed in the mutant. Reduced ena levels correlated with increased apoptosis, which could be rescued in the presence of p35, a general caspase inhibitor. Thus, this mutant recapitulates two important features of human neurodegenerative diseases, i.e., vulnerability of certain neuronal populations and progressive degeneration, offering a unique scenario in which to unravel the specific mechanisms in an easily tractable organism.